The collective operation of robots, such as unmanned aerial vehicles (UAVs) that operate as a team or swarm, is influenced by their individual capabilities, which in turn depend on their physical design, i.e., morphology. However, apart from a few (albeit ad hoc) evolutionary robotics approaches, there has been very little work on understanding the interplay of morphology and collective behavior. In particular, there is a lack of computational frameworks that concurrently search for the robot morphology and the hyperparameters of its behavior model such that the two jointly optimize collective (team) performance. To address this gap, this paper proposes a new co-design framework. Here, the otherwise explosive computational cost of a nested morphology/behavior co-design is effectively mitigated through novel "talent" metrics, while also admitting notably better solutions than the typically sub-optimal sequential morphology → behavior design approach. The framework comprises four major steps: talent metric selection, talent Pareto exploration (a multi-objective morphology optimization process), behavior optimization, and morphology finalization. This co-design concept is demonstrated by applying it to design UAVs that operate as a team to localize signal sources, e.g., in victim search and hazard localization. Here, the collective behavior is driven by a recently reported batch Bayesian search algorithm called Bayes-Swarm. Our case studies show that the co-design outcomes yield significantly higher success rates in signal source localization compared to a baseline design, across a variety of signal environments and teams of 6 to 15 UAVs. Moreover, compared to a projected nested design approach, this co-design process provides a two-orders-of-magnitude reduction in computing time.
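The four-step decomposition above can be illustrated with a deliberately toy sketch. The morphology variables, talent metrics, and team-performance surrogate below are all invented for illustration and are not the paper's models; only the structure mirrors the described framework — explore a Pareto front in the low-dimensional talent space first (step 2), then run behavior optimization only over that front (step 3) instead of nesting it inside a full morphology search, and finalize the morphology that realizes the best talent vector (step 4).

```python
import random

random.seed(0)

# Hypothetical morphology variables in [0, 1]^2: (wing_span, battery_frac).
def talents(morph):
    """Map a morphology to two made-up 'talent' metrics (speed, range)."""
    wing, batt = morph
    speed = wing * (1.0 - 0.5 * batt)          # heavier battery -> slower
    flight_range = batt * (0.5 + 0.5 * wing)   # bigger battery -> longer range
    return speed, flight_range

def pareto_front(scored):
    """Step 2: keep morphologies whose talent vectors are non-dominated."""
    front = []
    for i, (t, m) in enumerate(scored):
        dominated = any(
            u[0] >= t[0] and u[1] >= t[1] and (u[0] > t[0] or u[1] > t[1])
            for j, (u, _) in enumerate(scored) if j != i
        )
        if not dominated:
            front.append((t, m))
    return front

def team_performance(talent, beta):
    """Toy surrogate for collective-search performance, given the talents
    and a single behavior hyperparameter beta (exploration weight)."""
    speed, flight_range = talent
    return speed * beta + flight_range * (1.0 - beta)

# Step 1 (choice of talent metrics) is fixed above; sample morphologies:
morphs = [(random.random(), random.random()) for _ in range(200)]
scored = [(talents(m), m) for m in morphs]
front = pareto_front(scored)

# Steps 3-4: behavior optimization runs only over the talent front
# (a nested design would instead re-optimize beta for all 200 samples).
candidates = [(team_performance(t, b / 10), m, b / 10)
              for (t, m) in front for b in range(11)]
perf, best_morph, best_beta = max(candidates)
```

The computational saving in the sketch is the same one the talent metrics buy in the paper: the expensive inner loop (behavior optimization) touches only the non-dominated talent points rather than every morphology sample.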
A tether-net launched from a chaser spacecraft offers a promising method to capture and dispose of large space debris in orbit. This tether-net system is subject to several sources of uncertainty in sensing and actuation that affect the performance of its net launch and closing control. Earlier reliability-based optimization approaches to designing the control actions, however, remain challenging and computationally prohibitive to generalize over different launch scenarios and different states of the target (debris) relative to the chaser. To search for a general and reliable control policy, this paper presents a reinforcement learning framework that integrates a proximal policy optimization (PPO2) approach with net dynamics simulations. The latter allows evaluating episodes of net-based target capture and estimating the capture quality index, which serves as the reward feedback to PPO2. Here, the learned policy is designed to model the timing of the net-closing action based on the states of the moving net and the target, under any given launch scenario. A stochastic state transition model is considered in order to incorporate synthetic uncertainties in state estimation and launch actuation. Along with notable reward improvement during training, the trained policy demonstrates capture performance, over a wide range of launch/target scenarios, that is close to that of reliability-based optimization run separately on each individual scenario.
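As a rough illustration of the learning loop described above, the sketch below trains a one-parameter stochastic policy to time a "net closing" action in a toy episode with synthetic sensing noise. For brevity it uses plain REINFORCE rather than PPO2, and the episode dynamics and "capture quality index" reward are invented stand-ins, not the paper's net dynamics simulation.

```python
import math
import random

random.seed(1)

def episode(theta):
    """One toy net-launch episode. The net starts 10 units from the target
    and advances 1 unit per step; at each step the policy decides, from a
    noisy distance observation, whether to trigger net closing. The made-up
    'capture quality index' reward peaks when closing at distance 2."""
    dist, grad = 10.0, 0.0
    while dist > 0:
        obs = dist + random.gauss(0.0, 0.3)            # synthetic sensing noise
        p_close = 1.0 / (1.0 + math.exp(-(theta - obs)))
        if random.random() < p_close:                  # trigger closing now
            grad += 1.0 - p_close                      # d log p_close / d theta
            return math.exp(-((dist - 2.0) ** 2)), grad
        grad -= p_close                                # d log(1 - p_close) / d theta
        dist -= 1.0
    return 0.0, grad                                   # net passed without closing

# Policy-gradient updates (the paper uses PPO2; plain REINFORCE here
# for brevity, with no baseline or clipping).
theta, lr = 0.0, 0.5
for _ in range(2000):
    reward, grad = episode(theta)
    theta += lr * reward * grad
```

The role of the noisy observation mirrors the paper's stochastic state transition model: the policy must learn a closing threshold that is robust to the injected uncertainty rather than exact in any single episode.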
Self-attention has the promise of improving computer vision systems due to parameter-independent scaling of receptive fields and content-dependent interactions, in contrast to parameter-dependent scaling and content-independent interactions of convolutions. Self-attention models have recently been shown to have encouraging improvements on accuracy-parameter trade-offs compared to baseline convolutional models such as ResNet-50. In this work, we aim to develop self-attention models that can outperform not just the canonical baseline models, but even the high-performing convolutional models. We propose two extensions to self-attention that, in conjunction with a more efficient implementation of self-attention, improve the speed, memory usage, and accuracy of these models. We leverage these improvements to develop a new self-attention model family, HaloNets, which reach state-of-the-art accuracies on the parameter-limited setting of the ImageNet classification benchmark. In preliminary transfer learning experiments, we find that HaloNet models outperform much larger models and have better inference performance. On harder tasks such as object detection and instance segmentation, our simple local self-attention and convolutional hybrids show improvements over very strong baselines. These results mark another step in demonstrating the efficacy of self-attention models on settings traditionally dominated by convolutional models.
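A minimal 1-D sketch of blocked local self-attention with a "halo", the core idea that HaloNets extend to 2-D feature maps: queries are split into non-overlapping blocks, and each block attends to keys/values from its own block plus a halo of neighbors on each side. This is a simplification for illustration only — single head, zero padding with no masking of padded positions — not the paper's implementation.

```python
import numpy as np

def halo_attention_1d(x, wq, wk, wv, block=4, halo=1):
    """Toy 1-D blocked local self-attention with a halo.

    x: (L, d) sequence; wq, wk, wv: (d, d) projection matrices.
    Each block of `block` queries attends to `block + 2 * halo` keys/values
    gathered from a zero-padded copy of the sequence."""
    L, d = x.shape
    assert L % block == 0, "sequence length must be divisible by block size"
    q, k, v = x @ wq, x @ wk, x @ wv
    # Pad keys/values so every block can gather its haloed neighborhood
    # (padded positions contribute zero values; masking omitted for brevity).
    kp = np.pad(k, ((halo, halo), (0, 0)))
    vp = np.pad(v, ((halo, halo), (0, 0)))
    out = np.zeros_like(q)
    for s in range(0, L, block):
        qb = q[s:s + block]                  # (block, d) query block
        kb = kp[s:s + block + 2 * halo]      # haloed keys, in padded coords
        vb = vp[s:s + block + 2 * halo]
        att = qb @ kb.T / np.sqrt(d)         # scaled dot-product scores
        att = np.exp(att - att.max(axis=1, keepdims=True))
        att /= att.sum(axis=1, keepdims=True)
        out[s:s + block] = att @ vb
    return out
```

Because every block sees only a fixed-size haloed window, compute and memory grow linearly in L rather than quadratically, while the receptive field can still be widened (by growing `halo`) without adding parameters — the contrast with convolutions that the abstract highlights.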